    Multimodal, Embodied and Location-Aware Interaction

    This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric body-based case and a large-scale, exocentric 'world-based' case.

    BodySpace is a gesture-based application which utilises multiple sensors and pattern recognition to enable the human body to be used as the interface to an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application, and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction.

    GpsTunes is a mobile, multimodal navigation system with inertial control that enables users to actively explore and navigate an area in an augmented physical space, incorporating and displaying the uncertainty that results from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling, and output is displayed both visually and in audio, with the audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites of interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation along set trajectories or paths unfamiliar to the user, for varying trajectory width and context. We demonstrate the possibility of creating a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented.

    AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
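
    The Monte Carlo uncertainty-propagation idea can be made concrete with a short sketch. The following is a minimal illustration, not the GpsTunes implementation: it assumes Gaussian noise on the sensed position, heading and speed (all parameter values here are hypothetical), propagates samples forward under a constant-velocity model, and reads off the probability mass of the predicted distribution that falls near a hypothetical site of interest.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000                                  # Monte Carlo samples
gps_pos = np.array([0.0, 0.0])            # reported position (m, local frame)
gps_sigma = 5.0                           # assumed GPS noise std dev (m)
heading_mu, heading_sigma = np.deg2rad(40.0), np.deg2rad(15.0)
speed_mu, speed_sigma = 1.4, 0.3          # assumed walking speed (m/s)
horizon = 10.0                            # prediction horizon (s)

# Sample the uncertain current state: position, heading and speed.
pos = gps_pos + rng.normal(0.0, gps_sigma, size=(N, 2))
heading = rng.normal(heading_mu, heading_sigma, size=N)
speed = rng.normal(speed_mu, speed_sigma, size=N)

# Propagate every sample forward under a constant-velocity model.
step = horizon * speed[:, None] * np.c_[np.cos(heading), np.sin(heading)]
future = pos + step

# Fraction of predicted positions within 10 m of a hypothetical site of
# interest: the probability mass a display would convey to the user.
site, radius = np.array([12.0, 9.0]), 10.0
p_near_site = np.mean(np.linalg.norm(future - site, axis=1) < radius)
print(f"P(near site within {horizon:.0f}s) = {p_near_site:.2f}")
```

    In a system of this kind, such a quantity would drive both the visual display and the granular-synthesis audio rendering; presenting the full sample distribution, rather than a single best-guess position, is what the abstract contrasts with a simplistic interpretation of the sensed data.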

    Incidental learning of trust from eye-gaze: Effects of race and facial trustworthiness

    Humans rapidly make inferences about individuals’ trustworthiness on the basis of their facial features and perceived group membership. We examine whether incidental learning about trust from shifts in gaze direction is influenced by these facial features. To do so, we examine two types of face category: the race of the face, and the initial trustworthiness of the face based on physical appearance. We find that cueing of attention by eye-gaze is unaffected by race or initial levels of trust, whereas incidental learning of trust from gaze behaviour is selectively influenced. That is, learning of trust is reduced for other-race faces, as predicted by the reduced ability to identify members of other races (Experiment 1). In contrast, converging findings from an independently gathered data set showed that the initial trustworthiness of faces did not influence learning of trust (Experiment 2). These results show that learning about the behaviour of other-race faces is poorer than for own-race faces, but that this cannot be explained by differences in the perceived trustworthiness of different groups.

    Ocular Point-of-Care Ultrasonography to Diagnose Posterior Chamber Abnormalities: A Systematic Review and Meta-analysis

    Importance: Diagnosing posterior chamber ocular abnormalities typically requires specialist assessment. Point-of-care ultrasonography (POCUS) performed by nonspecialists, if accurate, could negate the need for urgent ophthalmologist evaluation.
    Objective: This meta-analysis sought to define the diagnostic test characteristics of emergency practitioner-performed ocular POCUS for diagnosing multiple posterior chamber abnormalities in adults.
    Data sources: PubMed (OVID), MEDLINE, EMBASE, Cochrane, CINAHL, and SCOPUS were searched from inception through June 2019 without restrictions. Conference abstracts and trial registries were also searched. Bibliographies of included studies and relevant reviews were manually searched, and experts in the field were queried.
    Study selection: Included studies compared ocular POCUS performed by emergency practitioners with a reference standard of ophthalmologist evaluation. Pediatric studies were excluded. All 116 studies identified during abstract screening as potentially relevant underwent full-text review by multiple authors, and 9 studies were included.
    Data extraction and synthesis: In accordance with PRISMA guidelines, multiple authors extracted data from included studies. Results were meta-analyzed for each diagnosis using a bivariate random-effects model. Data analysis was performed in July 2019.
    Main outcomes and measures: The outcomes of interest were the diagnostic test characteristics of ocular POCUS for the following diagnoses: retinal detachment, vitreous hemorrhage, vitreous detachment, intraocular foreign body, globe rupture, and lens dislocation.
    Results: Nine studies (1189 eyes) were included. All studies evaluated retinal detachment, but up to 5 studies assessed each of the other diagnoses of interest. For retinal detachment, sensitivity was 0.94 (95% CI, 0.88-0.97) and specificity was 0.94 (95% CI, 0.85-0.98). Sensitivity and specificity were 0.90 (95% CI, 0.65-0.98) and 0.92 (95% CI, 0.75-0.98), respectively, for vitreous hemorrhage, and 0.67 (95% CI, 0.51-0.81) and 0.89 (95% CI, 0.53-0.98), respectively, for vitreous detachment. Sensitivity and specificity were high for lens dislocation (0.97 [95% CI, 0.83-0.99] and 0.99 [95% CI, 0.97-1.00]), intraocular foreign body (1.00 [95% CI, 0.81-1.00] and 0.99 [95% CI, 0.99-1.00]), and globe rupture (1.00 [95% CI, 0.63-1.00] and 0.99 [95% CI, 0.99-1.00]). Results were generally unchanged in sensitivity analyses of studies with low risk of bias.
    Conclusions and relevance: This study suggests that emergency practitioner-performed ocular POCUS is an accurate test for retinal detachment in adults. Its utility in diagnosing other posterior chamber abnormalities is promising but needs further study.
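
    To make the reported test characteristics concrete, the sketch below shows how a single study's sensitivity and specificity (with Wilson score confidence intervals) follow from a 2x2 table of POCUS results against the ophthalmologist reference standard. The counts are hypothetical, not data from the included studies, and the pooling across studies in the meta-analysis additionally used a bivariate random-effects model, which this sketch does not attempt.

```python
from statistics import NormalDist

# Hypothetical 2x2 table: POCUS result vs. ophthalmologist reference standard.
tp, fn = 47, 3    # abnormality present: POCUS positive / POCUS negative
tn, fp = 94, 6    # abnormality absent:  POCUS negative / POCUS positive

def wilson_ci(k: int, n: int, level: float = 0.95) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion k/n."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * ((p * (1 - p) + z**2 / (4 * n)) / n) ** 0.5 / (1 + z**2 / n)
    return centre - half, centre + half

sens, spec = tp / (tp + fn), tn / (tn + fp)
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sens:.2f} (95% CI {lo:.2f}-{hi:.2f})")
lo, hi = wilson_ci(tn, tn + fp)
print(f"specificity {spec:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```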